141 research outputs found
Memory transfer optimization for a lattice Boltzmann solver on Kepler architecture nVidia GPUs
The Lattice Boltzmann method (LBM) for solving fluid flow is naturally well
suited to an efficient implementation for massively parallel computing, due to
the prevalence of local operations in the algorithm. This paper presents and
analyses the performance of a 3D lattice Boltzmann solver, optimized for third
generation nVidia GPU hardware, also known as `Kepler'. We provide a review of
previous optimisation strategies and analyse data read/write times for
different memory types. In LBM, the time propagation step (known as streaming)
involves shifting data to adjacent locations and is central to parallel
performance; here we examine three approaches which make use of different
hardware options. Two of these use `performance-enhancing' features of the
GPU: shared memory and the new shuffle instruction found in Kepler-based
GPUs. These are compared to a standard transfer of data which relies instead on
optimised storage to increase coalesced access. It is shown that the simpler
approach is the most efficient, since the need for a large number of
registers per thread in LBM limits the block size, and thus the efficiency of
these special features is reduced. Detailed results are obtained for a D3Q19
LBM solver, which is benchmarked on nVidia K5000M and K20C GPUs. In the latter
case the use of a read-only data cache is explored, and peak performance of
over 1036 Million Lattice Updates Per Second (MLUPS) is achieved. The
appearance of a periodic bottleneck in the solver performance is also reported,
believed to be hardware related; spikes in iteration-time occur with a
frequency of around 11 Hz for both GPUs, independent of the size of the problem.
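The streaming step central to the paper above can be illustrated in a few lines. The following is a minimal NumPy sketch (not the paper's optimised CUDA implementation): each of the 19 discrete velocities of the D3Q19 lattice shifts its distribution array by its lattice vector, with periodic wrap-around standing in for boundary handling.

```python
import numpy as np

# D3Q19 lattice velocities: one rest particle, 6 face neighbours (|x|+|y|+|z| = 1)
# and 12 edge neighbours (|x|+|y|+|z| = 2).
c = [(0, 0, 0)]
c += [(x, y, z)
      for x in (-1, 0, 1) for y in (-1, 0, 1) for z in (-1, 0, 1)
      if abs(x) + abs(y) + abs(z) in (1, 2)]
assert len(c) == 19

def stream(f):
    """Streaming step: shift each distribution f[q] along its velocity c[q].

    f has shape (19, nx, ny, nz); np.roll gives periodic boundaries.
    This data movement is what the shared-memory, shuffle and
    coalesced-access variants compared in the abstract implement
    differently on the GPU.
    """
    return np.stack([np.roll(f[q], shift=cq, axis=(0, 1, 2))
                     for q, cq in enumerate(c)])
```

Because streaming only moves values between lattice sites, the total of each distribution is conserved; on a GPU, the cost of this step is dominated by how the shifted reads and writes map onto memory transactions, which is the trade-off the paper benchmarks.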
Assessment of a common nonlinear eddy-viscosity turbulence model in capturing laminarization in mixed convection flows
Laminarization is an important topic in heat transfer and turbulence modeling.
Recent studies have demonstrated that several well-known turbulence models
failed to provide accurate prediction when applied to mixed convection flows
with significant re-laminarization effects. One of those models, a well-validated
cubic nonlinear eddy-viscosity model, was observed to miss this feature entirely.
This paper studies the reasons behind this failure by providing a detailed
comparison with the baseline Launder–Sharma model. The difference is attributed
to the method of near-wall damping. A range of tests have been conducted and
two noteworthy findings are reported for the case of flow re-laminarization.
Assessment of RANS and DES methods for realistic automotive models
This paper presents a comprehensive investigation of RANS and DES models for the Ahmed car body and a realistic automotive vehicle: the DrivAer model. A variety of RANS models, from the 1-equation Spalart-Allmaras model to a low-Reynolds-number Reynolds Stress model, have shown an inability to consistently capture the flow field correctly for both the Ahmed car body and the DrivAer model, with the under-prediction of the turbulence in the initial separated shear layer found to be a key deficiency. It has been shown that the use of a hybrid RANS-LES model (in this case, Detached Eddy Simulation) offers an advantage over RANS models in terms of the force coefficients and general flow field for both the Ahmed car body and the DrivAer model. However, for both cases, even at the finest mesh level, hybrid RANS-LES methods still exhibited inaccuracies. Suggestions are made on possible improvements, in particular on the use of embedded LES with synthetic turbulence generation. Finally, the computational cost of each approach is compared, which shows that whilst hybrid RANS-LES methods offer a clear benefit over RANS models for automotive-relevant flows, they do so at a much increased cost.